In preparing a recent article for publication (Gorham and Kelly 2014), I came across an article by Leimu and Koricheva (2005), who “cast doubt on the validity of using citation counts as an objective and unbiased tool for academic evaluation in ecology.” Certainly, in the large research teams prevalent today it would seem obvious that fifth- and sixth-placed authors should not get the same citation credit as first- and second-placed authors. Yet we know that, in a general way, ecologists with tens of thousands of citations to their papers are likely to be rated more highly than their colleagues with hundreds or thousands. And when I read in Ecological Society of America (ESA) Today (Autumn/Winter 2012) of the establishment of an initial group of Fellows of the Ecological Society of America (FESA), it occurred to me that one could investigate this matter further, the more so because that group includes a number of members of the National Academy of Sciences (NAS), who are likely to be regarded as even more accomplished than Fellows of the ESA. By checking the Web, I was able to compile a set of 22 Fellows of the ESA with Google Scholar records, including 12 who are also members of the NAS. For each member of these two groups, I recorded 10 characteristics of their publications: number of citations, h index (the largest number h such that h articles have each been cited at least h times), i10 index (number of articles with at least 10 citations), total number of articles cited, citations per article, citations per year, articles per year, percentage sole author, percentage first author in a team of two or more, and average team rank for the highest 20 team citations. I also recorded total years of citation per individual. These characteristics are, of course, unlikely to be of equal merit in assessing reputation. The two sets of ecologists are compared in Table 1.
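For readers unfamiliar with these two indices, the following minimal sketch (in Python, using an entirely hypothetical citation record) shows how both are computed from a list of per-article citation counts:

```python
def h_index(citations):
    """Largest h such that h articles have each been cited at least h times."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this article still has at least as many citations as its rank
        else:
            break
    return h

def i10_index(citations):
    """Number of articles with at least 10 citations."""
    return sum(1 for cites in citations if cites >= 10)

# Hypothetical record: nine articles with these citation counts
record = [120, 48, 33, 12, 10, 9, 4, 1, 0]
print(h_index(record))    # prints 6
print(i10_index(record))  # prints 5
```

Note that both indices depend only on the per-article counts, which is why errors in a few low-citation attributions barely affect them.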
The overall ranges for each characteristic are large; maximum/minimum quotients vary from as low as 3.8 for team rank to as high as 23 for percentage sole author. Because each record states that “dates and citations are estimated and are determined automatically by a computer program,” they are subject to error, and I found a dozen cases of wrong attribution. Even a doubling to two dozen cases would represent an error rate of only 0.4%. Moreover, the errors I did find did not involve high citation numbers. Because h and i10 indices are computed directly from the citation counts, I decided not to attempt correction of the wrongfully attributed articles. In dealing with almost 6000 cited articles, I tried to avoid error myself by making my estimates twice, between February 6 and 10 and from March 10 to 11, 2015. Results were averaged except in a few cases where divergence was sufficient to suggest a third check of the data. For each of the variables in Table 1, there is a distinct contrast between the two groups of ecologists, FESA and FESA + NAS, with the latter yielding (with one exception) the higher numbers, although with considerable overlap. The contrast can be assessed as a quotient that divides the median number for FESA + NAS by the median number for FESA (Table 2). The medians are generally higher for FESA + NAS, especially in number of citations (quotient 2.7) and citations per year (quotient 2.4). On the other hand, percentage sole author (quotient 0.9) balances slightly the other way. Team rank (in the top 20 citations for which an individual was a team member) is distinctly lower, and presumably more influential, for FESA. Another way to look at the variables in Table 1 is to rank-order the 22 ecologists on the 10 publication characteristics (excluding years of citation) and then average the ranks to see whether they order differently from ranking by citation counts alone.
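The quotient of medians used in Table 2 is simple to reproduce; the sketch below illustrates the calculation with invented citation counts for the two groups (the real data table, with names replaced by numbers, is available from the author):

```python
from statistics import median

# Hypothetical citation counts for illustration only
fesa_nas_citations = [30000, 45000, 61000, 72000]  # FESA + NAS group
fesa_citations = [12000, 18000, 25000, 28000]      # FESA-only group

# Contrast quotient: median of FESA + NAS divided by median of FESA
quotient = median(fesa_nas_citations) / median(fesa_citations)
print(round(quotient, 1))  # prints 2.5
```

Using medians rather than means keeps the quotient from being dominated by a single highly cited individual in either group.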
This is done in Table 3, which shows three distinct groups of individuals. The uppermost, individuals one to eight, are all FESA + NAS. The second is a mixed group of seven FESA and four FESA + NAS. The lowermost is a group of three FESA. Also evident is a near-separation into two groups: eleven FESA + NAS members plus one FESA outlier, and nine FESA members plus one FESA + NAS outlier. It is notable (and regrettable) that only 23% of the individuals in Table 3 are female, and that they are absent from Group 1. Group rankings for the 10 publication characteristics are contrasted in Fig. 1. Although Group 1 clearly dominates ranks 1–8, chiefly among the first seven characteristics of Table 1, it also has a substantial number of ranks from 15 to 22. Group 2 dominates the middle rankings, and Group 3 has a distinct presence in ranks 19–22, but with occasional ranks extending all the way into the range dominated by Group 1. A surprising result of this study is the lack of correlation between the number of articles cited at least once and the number of years an individual has been publishing (R² = 0.01). Moreover, number of citations is not significantly related to number of articles (R² = 0.15). This presumably reflects the gradual shift over time from articles by one or two authors to articles by teams often running into double digits (Gorham and Kelly 2014). One relationship among the characteristics is especially apparent. Fig. 2 shows a linear standard major axis, with neither variable treated as dependent (Ricker 1984), demonstrating the positive relationship between the square root of citation counts and the h index. The two measures are so closely correlated (R² = 0.88) that probably only one should be used in judging accomplishments. It is also noteworthy that four individuals cited for 47–67 years ranged in sole authorship from 24% to 48%, whereas the other 18 individuals, cited for 27–45 years, ranged from 2% to 22%.
To examine this further, I took the average authorship rank for each individual's top 20 team citations and related it to percentage sole authorship. The correlation is very modest (R² = 0.28), but for the individual with the highest sole-author percentage (48%), the average rank is 1.8, whereas for the individual with the lowest sole-author percentage (2.1%), the average rank is 5.6. Among members of the NAS, there is a distinct outlier (namely, me) with the lowest number of citations (Table 3) and also by far the longest record of publication. My first cited publication was in 1948; among other members of FESA + NAS, the earliest cited publication was in 1965. Being reluctant to consider that inadequate screening might have allowed my membership in the NAS, I shall suggest alternatives that may apply to other “overmature” ecologists. First, my most influential studies (on acid rain and the great importance of atmospheric deposition to ecosystems on poor soils) were done in the 1950s and 1960s, when ecologists, ecological journals, articles, and citations were far fewer than at present (Gorham and Kelly 2014). Moreover, most of my more recent research has been on northern peatlands, a focus for relatively few ecologists, despite such peatlands being a major sink in the global carbon cycle. They are also at risk from climate warming (Gorham 1991). Second, few scientists seem to be sufficiently concerned with the history of their discipline (pace Frank Egerton) to cite it regularly in their articles, so that most science articles peak within 2–4 years, go out of fashion, and reach low levels of citation within a decade or two, as I showed long ago for limnology (Gorham 1968). Third, I am sole author of 48% of my publications, as against a median of 18% for all 22 members, and so have relatively few team citations. But enough of my apologia!
As to what the various characteristics might indicate, independence might best be reflected by percentage sole authorship, whereas leadership might be reflected by percentage first authorship in articles with more than one author. Citations beyond first authorship, however, are increasingly likely to reflect utility to other ecologists as a collaborator. Total citation counts are an indication of overall utility to the community of ecologists, either as leader or as collaborator. In conclusion, it appears that the square root of the number of citations, or the h index, as presented in Fig. 2, can provide a broad but nevertheless helpful background for the evaluation of both individuals and departments of ecology. Individual ecologists may, however, find it more useful to compare their different publication characteristics with the ranges for FESA and FESA + NAS, bearing in mind that the great diversity of publication characteristics reflects great diversity in the ways ecologists pursue research, which have changed considerably over time (Gorham and Kelly 2014). The wide ranges for citation counts, number of articles published, etc., within each of two groups of highly reputable ecologists suggest the need for caution in their use for evaluating merit. As an example of the occasional extreme difficulty in judging the worth of single characteristics, consider two individuals, the first having his name on well over 400 articles and the second on well over 500. The first published 51% of his articles as sole or first author, the second only 7%. Moreover, the first has an average rank of 2.25 on his 20 highest team citations, whereas the second has an average rank of 5.60. However, the first individual has a citation count of only 36,400 as against 60,600 for the second. How should these characteristics be balanced against one another?
We should also remember that citation counts do not necessarily reflect the originality of the research cited (Gorham and Kelly 2014), nor, as suggested by my colleague Clarence Lehman, whether citations reflect a generally favorable or unfavorable view of that research. A major practical limitation of using a background of publication characteristics to assess merit is the need to produce that background at or near the time of evaluation, because the numbers inevitably shift with time. I thank Clarence Lehman for advice and help, and Julia Kelly for assistance. The data table on which this article is based, in which numbers are substituted for names, is available from the author.